Hard Negative Mixing for Contrastive Learning

Neural Information Processing Systems

Contrastive learning has become a key component of self-supervised learning approaches for computer vision. By learning to embed two augmented versions of the same image close to each other and to push the embeddings of different images apart, one can train highly transferable visual representations. As revealed by recent studies, heavy data augmentation and large sets of negatives are both crucial in learning such representations. At the same time, data mixing strategies, either at the image or the feature level, improve both supervised and semi-supervised learning by synthesizing novel examples, forcing networks to learn more robust features. In this paper, we argue that an important aspect of contrastive learning, i.e., the effect of hard negatives, has so far been neglected. To get more meaningful negative samples, current top contrastive self-supervised learning approaches either substantially increase the batch sizes or keep very large memory banks; increasing memory requirements, however, leads to diminishing returns in terms of performance. We therefore start by delving deeper into a top-performing framework and show evidence that harder negatives are needed to facilitate better and faster learning. Based on these observations, and motivated by the success of data mixing, we propose hard negative mixing strategies at the feature level that can be computed on the fly with minimal computational overhead. We exhaustively ablate our approach on linear classification, object detection, and instance segmentation, and show that employing our hard negative mixing procedure improves the quality of visual representations learned by a state-of-the-art self-supervised learning method.
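
To make the feature-level mixing concrete, the sketch below shows one way such hard negative synthesis could be wired into a MoCo-style InfoNCE setup. This is a minimal sketch under assumptions (PyTorch, L2-normalized query features and a memory queue of negatives); the function name, hyper-parameters (n_hard, s, s_prime), and the coefficient sampling are illustrative, not the authors' released implementation.

# Minimal sketch, assuming PyTorch and MoCo-style training with L2-normalized
# features; names, defaults, and sampling choices below are illustrative.
import torch
import torch.nn.functional as F

def synthesize_hard_negatives(q, queue, n_hard=64, s=16, s_prime=8):
    # q: (B, D) normalized query features; queue: (K, D) normalized negatives.
    # Returns (B, s + s_prime, D) synthetic negatives, one set per query.
    B, _ = q.shape
    rows = torch.arange(B).unsqueeze(1)

    # Rank queue negatives by similarity to each query; keep the hardest ones.
    sim = q @ queue.t()                               # (B, K)
    hard = queue[sim.topk(n_hard, dim=1).indices]     # (B, n_hard, D)

    # (a) Convex combinations of pairs of hard negatives.
    i = torch.randint(0, n_hard, (B, s))
    j = torch.randint(0, n_hard, (B, s))
    alpha = torch.rand(B, s, 1)
    mixed = alpha * hard[rows, i] + (1 - alpha) * hard[rows, j]

    # (b) Mix the query itself with hard negatives, keeping the query's
    #     weight below 0.5 so the result still acts as a negative.
    k = torch.randint(0, n_hard, (B, s_prime))
    beta = 0.5 * torch.rand(B, s_prime, 1)
    mixed_q = beta * q.unsqueeze(1) + (1 - beta) * hard[rows, k]

    synth = torch.cat([mixed, mixed_q], dim=1)
    return F.normalize(synth, dim=-1)                 # re-normalize mixed features

# Usage sketch inside an InfoNCE loss: append the per-query synthetic
# negatives to the usual positive/negative logits before the softmax.
#   synth   = synthesize_hard_negatives(q, queue)          # (B, S, D)
#   l_synth = torch.einsum('bd,bsd->bs', q, synth)         # (B, S)
#   logits  = torch.cat([l_pos, l_neg, l_synth], dim=1) / tau
#   loss    = F.cross_entropy(logits, torch.zeros(B, dtype=torch.long, device=q.device))

Because the mixing operates on features that have already been computed, the extra cost is a few matrix multiplications per batch, consistent with the abstract's claim of on-the-fly synthesis with minimal overhead.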


Review for NeurIPS paper: Hard Negative Mixing for Contrastive Learning

Neural Information Processing Systems

Weaknesses: In general, my biggest concern is that the gains from the proposed method are not very significant. For example, can the proposed module be dropped in directly to improve [25] and [28]? The explanation in lines 233-234 is a bit counter-intuitive. I understand this is possible if the inter-class distance increases more than the intra-class variance, but can you plot it to justify this better? Otherwise, increasing the intra-class variance should lead to lower accuracy.


Review for NeurIPS paper: Hard Negative Mixing for Contrastive Learning

Neural Information Processing Systems

All the reviewers agree this paper is strong. Given the popularity of contrastive learning, the ideas in this paper will be timely at NeurIPS. The reviewers also praised the strong empirical results, especially over established baselines. As the reviewers note, the careful analysis and use of hard negative mining lead to these strong results.

